Abstract. Reactive halogen chemistry in the springtime Arctic causes ozone depletion events and alters the rate of pollution processing. There are still many uncertainties regarding this chemistry, including the multiphase recycling of halogens and how sea ice impacts the source strength of reactive bromine. Adding to these uncertainties are the impacts of a rapidly warming Arctic. We present observations from the CHACHA (CHemistry in the Arctic: Clouds, Halogens, and Aerosols) field campaign based out of Utqiaġvik, Alaska, from mid-February to mid-April of 2022 to provide information on the vertical distribution of bromine monoxide (BrO), which is a tracer for reactive bromine chemistry. Data were gathered using the Heidelberg Airborne Imaging DOAS (differential optical absorption spectroscopy) Instrument (HAIDI) on the Purdue University Airborne Laboratory for Atmospheric Research (ALAR) and employing a unique sampling technique of vertically profiling the lower atmosphere with the aircraft via "porpoising" maneuvers. Observations from HAIDI were coupled to radiative transfer model calculations to retrieve mixing ratio profiles throughout the lower atmosphere (below 1000 m), with unprecedented vertical resolution (50 m) and total information gathered (average of 17.5 degrees of freedom) for this region. A cluster analysis was used to categorize 245 retrieved BrO mixing ratio vertical profiles into four common profile shapes. We often found the highest BrO mixing ratios at the Earth's surface with a mean of nearly 30 pmol mol⁻¹ in the lowest 50 m, indicating an important role for multiphase chemistry on the snowpack in reactive bromine production. Most lofted-BrO profiles corresponded with an aerosol profile that peaked at the same altitude (225 m above the ground), suggesting that BrO was maintained due to heterogeneous reactions on particle surfaces aloft during these profiles. A majority (11 of 15) of the identified lofted-BrO profiles occurred on a single day, 19 March 2022, over an area covering more than 24 000 km², indicating that this was a large-scale lofted-BrO event. The clustered BrO mixing ratio profiles should be particularly useful for some MAX-DOAS (multi-axis DOAS) studies, where a priori BrO profiles and their uncertainties, used in optimal estimation inversion algorithms, are not often based on previous observations. Future MAX-DOAS studies (and past reanalyses) could rely on the profiles provided in this work to improve BrO retrievals.
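The profile-clustering step described in the abstract can be sketched in a few lines. The sketch below is illustrative only: the profile data are synthetic, and the choice of k-means is an assumption for demonstration, since the paper's actual clustering method and its 245 retrieved profiles are not reproduced here.

```python
import numpy as np

def kmeans_profiles(profiles, k, n_iter=100):
    """Cluster vertical profiles (rows) into k common shapes.

    A minimal k-means with farthest-point initialisation; the study's
    actual clustering algorithm may differ.
    """
    # Farthest-point initialisation spreads the starting centroids out.
    centroids = [profiles[0]]
    for _ in range(k - 1):
        dists = np.min(
            [np.linalg.norm(profiles - c, axis=1) for c in centroids], axis=0)
        centroids.append(profiles[dists.argmax()])
    centroids = np.array(centroids)

    for _ in range(n_iter):
        # Assign every profile to its nearest centroid (Euclidean distance).
        d = np.linalg.norm(profiles[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each centroid as the mean of its assigned profiles.
        new = np.array([profiles[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

# Synthetic stand-ins for two of the profile shapes the paper describes,
# on a 20-level vertical grid: a surface-peaked shape and a lofted shape.
z = np.arange(20)
surface = np.exp(-z / 3.0)                # maximum at the ground
lofted = np.exp(-((z - 5.0) ** 2) / 4.0)  # maximum aloft
rng = np.random.default_rng(1)
data = np.vstack([
    surface + 0.05 * rng.standard_normal((30, 20)),
    lofted + 0.05 * rng.standard_normal((30, 20)),
])
labels, centroids = kmeans_profiles(data, k=2)
```

With well-separated shapes like these, the recovered cluster centroids approximate the underlying surface-peaked and lofted profiles.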
-
We present a psychometric evaluation of the Cybersecurity Curriculum Assessment (CCA), completed by 193 students from seven colleges and universities. The CCA builds on our prior work developing and validating a Cybersecurity Concept Inventory (CCI), which measures students' conceptual understanding of cybersecurity after a first course in the area. The CCA deepens the conceptual complexity and technical depth expectations, assessing conceptual knowledge of students who had completed multiple courses in cybersecurity. We review our development of the CCA and present our evaluation of the instrument using Classical Test Theory and Item-Response Theory. The CCA is a difficult assessment, providing reliable measurements of student knowledge and deeper information about high-performing students.
-
We present a psychometric evaluation of a revised version of the Cybersecurity Concept Inventory (CCI), completed by 354 students from 29 colleges and universities. The CCI is a conceptual test of understanding created to enable research on instruction quality in cybersecurity education. This work extends previous expert review and small-scale pilot testing of the CCI. Results show that the CCI aligns with a curriculum many instructors expect from an introductory cybersecurity course, and that it is a valid and reliable tool for assessing what conceptual cybersecurity knowledge students learned.
-
We explore shadow information technology (IT) at institutions of higher education through a two-tiered approach involving a detailed case study and comprehensive survey of IT professionals. In its many forms, shadow IT is the software or hardware present in a computer system or network that lies outside the typical review process of the responsible IT unit. We carry out a case study of an internally built legacy grants management system at the University of Maryland, Baltimore County that exemplifies the vulnerabilities, including cross-site scripting and SQL injection, typical of such unauthorized and ad-hoc software. We also conduct a survey of IT professionals at universities, colleges, and community colleges that reveals new and actionable information regarding the prevalence, usage patterns, types, benefits, and risks of shadow IT at their respective institutions. Further, we propose a security-based profile of shadow IT, involving a subset of elements from existing shadow IT taxonomies, which categorizes shadow IT from a security perspective. Based on this profile, survey respondents identified the predominant form of shadow IT at their institutions, revealing close similarities to findings from our case study. Through this work, we are the first to identify possible susceptibility factors associated with the occurrence of shadow IT related security incidents within academic institutions. Correlations of significance include the presence of certain graduate schools, the level of decentralization of the IT department, the types of shadow IT present, the percentage of security violations related to shadow IT, and the institution's overall attitude toward shadow IT. The combined elements of our case study, profile, and survey provide the first multifaceted view of shadow IT security at academic institutions, highlighting tension between its risks and benefits, and suggesting strategies for managing it successfully.
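The SQL injection class of vulnerability mentioned in the case study, and its standard remediation via parameterized queries, can be sketched as follows. The table and column names are hypothetical and are not taken from the UMBC grants management system.

```python
import sqlite3

# A throwaway in-memory database with a hypothetical grants table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE grants (id INTEGER, pi TEXT)")
conn.execute("INSERT INTO grants VALUES (1, 'smith'), (2, 'jones')")

def lookup_vulnerable(pi_name):
    # BAD: string interpolation lets attacker-controlled input rewrite
    # the query itself (classic SQL injection).
    return conn.execute(
        f"SELECT id FROM grants WHERE pi = '{pi_name}' ORDER BY id").fetchall()

def lookup_safe(pi_name):
    # GOOD: a parameterized query treats the input purely as data.
    return conn.execute(
        "SELECT id FROM grants WHERE pi = ? ORDER BY id", (pi_name,)).fetchall()

# The injected condition makes the vulnerable query's WHERE clause
# always true, dumping every row; the safe query matches nothing.
evil = "x' OR '1'='1"
```

The same pattern applies to any database driver: build queries with placeholders supplied by the driver, never by string concatenation of user input.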
-
We reflect on our ongoing journey in the educational Cybersecurity Assessment Tools (CATS) Project to create two concept inventories for cybersecurity. We identify key steps in this journey and important questions we faced. We explain the decisions we made and discuss the consequences of those decisions, highlighting what worked well and what might have gone better. The CATS Project is creating and validating two concept inventories—conceptual tests of understanding—that can be used to measure the effectiveness of various approaches to teaching and learning cybersecurity. The Cybersecurity Concept Inventory (CCI) is for students who have recently completed any first course in cybersecurity; the Cybersecurity Curriculum Assessment (CCA) is for students who have recently completed an undergraduate major or track in cybersecurity. Each assessment tool comprises 25 multiple-choice questions (MCQs) of various difficulties that target the same five core concepts, but the CCA assumes greater technical background. Key steps include defining project scope, identifying the core concepts, uncovering student misconceptions, creating scenarios, drafting question stems, developing distractor answer choices, generating educational materials, performing expert reviews, recruiting student subjects, organizing workshops, building community acceptance, forming a team and nurturing collaboration, adopting tools, and obtaining and using funding. Creating effective MCQs is difficult and time-consuming, and cybersecurity presents special challenges. Because cybersecurity issues are often subtle, where the adversarial model and details matter greatly, it is challenging to construct MCQs for which there is exactly one best but non-obvious answer. We hope that our experiences and lessons learned may help others create more effective concept inventories and assessments in STEM.
-
We analyze expert review and student performance data to evaluate the validity of the Cybersecurity Concept Inventory (CCI) for assessing student knowledge of core cybersecurity concepts after a first course on the topic. A panel of 12 experts in cybersecurity reviewed the CCI, and 142 students from six different institutions took the CCI as a pilot test. The panel reviewed each item of the CCI and the overwhelming majority rated every item as measuring appropriate cybersecurity knowledge. We administered the CCI to students taking a first cybersecurity course either online or proctored by the course instructor. We applied classical test theory to evaluate the quality of the CCI. This evaluation showed that the CCI is sufficiently reliable for measuring student knowledge of cybersecurity and that the CCI may be too difficult as a whole. We describe the results of the expert review and the pilot test and provide recommendations for the continued improvement of the CCI.
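A common classical-test-theory reliability statistic is Cronbach's alpha (equivalent to KR-20 for dichotomous items); whether this is the exact statistic used in the study is not stated in the abstract, and the score matrix below is invented for illustration only.

```python
import numpy as np

def cronbach_alpha(responses):
    """Cronbach's alpha for an (n_students, n_items) score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)),
    where k is the number of items. For 0/1 items this equals KR-20.
    """
    k = responses.shape[1]
    item_var = responses.var(axis=0, ddof=1).sum()
    total_var = responses.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Illustrative 0/1 responses (rows = students, columns = items),
# not actual CCI data.
scores = np.array([
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
    [1, 1, 1, 1],
])
alpha = cronbach_alpha(scores)
```

Values of alpha near 1 indicate that the items measure a common construct consistently; values used as "sufficiently reliable" thresholds vary by field.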